
    An event-triggered smart sensor network architecture

    A smart transducer is the integration of a sensor/actuator element, a processing unit and a network interface. Smart sensor networks are composed of smart transducer nodes interconnected through a communication network. This paper proposes a new architecture for smart sensor networks that is driven by events (asynchronous data). The events are derived from a data compression algorithm embedded in the smart sensor, which compresses the data produced by the sensor. The proposed architecture also provides configuration and monitoring data to manage the distributed system.
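The event-generation principle described above can be illustrated with a send-on-delta scheme, a common compression strategy in smart sensors in which a sample is transmitted only when it deviates sufficiently from the last transmitted value. The sketch below is purely illustrative (the abstract does not specify this exact algorithm); the function name, readings and threshold are ours:

```python
def send_on_delta(samples, delta):
    """Return the (index, value) events a smart sensor would transmit."""
    events = []
    last_sent = None
    for i, x in enumerate(samples):
        # Transmit only when the reading moves more than `delta` away
        # from the last transmitted value (or on the very first sample).
        if last_sent is None or abs(x - last_sent) > delta:
            events.append((i, x))   # asynchronous event on the network
            last_sent = x
    return events

readings = [20.0, 20.1, 20.05, 21.0, 21.1, 23.5]
events = send_on_delta(readings, delta=0.5)
# events → [(0, 20.0), (3, 21.0), (5, 23.5)]
```

Only three of the six samples generate network traffic; the intermediate readings are suppressed as redundant, which is exactly what makes the architecture event-triggered rather than time-triggered.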

    Performance evaluation of a compression algorithm for wireless sensor networks in monitoring applications

    A wireless sensor network (WSN) is an emerging technology that targets multiple applications in different environments. Its infrastructure is composed of a large number of low-cost sensor nodes with limited physical capacity. Energy consumption must be optimized as far as possible in order to extend the network's lifetime. Data compression techniques can be an advantage in the WSN context, since they eliminate the transmission of redundant information and can consequently be adopted to minimize energy consumption in the sensor nodes. WSNs for monitoring applications can benefit from this technique, as it may maximize battery lifetime. The main motivation of this paper is to investigate the performance of a data compression algorithm for WSNs in the context of monitoring applications. To validate the proposal, simulation experiments were performed using the Network Simulator (NS-2) tool.

    Implementation of an event-triggered smart sensor network architecture based on the IEEE 802.15.4 standard

    A smart transducer is the integration of a sensor/actuator element, a processing unit, and a network interface. Smart sensor networks are composed of smart transducer nodes interconnected through a communication network. This paper presents an event-driven smart sensor network architecture (asynchronous data) and its implementation based on the IEEE 802.15.4 standard. The events are derived from a data compression algorithm embedded in the smart sensor, which compresses data from the sensor. The architecture also supports configuration and monitoring activities for the overall distributed system.

    An evolving approach to unsupervised and real-time fault detection in industrial processes

    Fault detection in industrial processes is a field of application that has been gaining considerable attention in the past few years, resulting in a large variety of techniques and methodologies designed to solve that problem. However, many of the approaches presented in the literature require substantial prior knowledge about the process, such as mathematical models, data distributions and pre-defined parameters. In this paper, we propose the application of TEDA (Typicality and Eccentricity Data Analytics), a fully autonomous algorithm, to the problem of fault detection in industrial processes. To perform fault detection, TEDA analyzes the density of each incoming data sample, calculated from the distance between that sample and all samples read so far. TEDA is an online algorithm that learns autonomously and requires neither prior knowledge about the process nor any user-defined parameters. Moreover, it requires minimal computational effort, enabling its use in real-time applications. The efficiency of the proposed approach is demonstrated on two real-world industrial plant data streams containing “normal” and “faulty” data. The results shown in this paper are very encouraging when compared with traditional fault detection approaches.
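The recursive updates that make TEDA suitable for data streams can be sketched as follows. This is a scalar, illustrative implementation following the published TEDA update equations (recursive mean and variance, eccentricity, and a Chebyshev-style m-sigma anomaly condition); the class name, sensitivity value and example stream are ours, not the paper's:

```python
class TEDA:
    """Recursive typicality/eccentricity detector for a scalar stream."""

    def __init__(self, m=2.0):
        self.k = 0          # number of samples seen so far
        self.mean = 0.0     # recursive mean
        self.var = 0.0      # recursive variance
        self.m = m          # m-sigma sensitivity of the anomaly test

    def update(self, x):
        """Feed one sample; return True if it is flagged as anomalous."""
        self.k += 1
        if self.k == 1:
            self.mean, self.var = x, 0.0
            return False
        # Recursive mean and variance (no past samples stored).
        self.mean += (x - self.mean) / self.k
        self.var = (self.k - 1) / self.k * self.var \
                   + (x - self.mean) ** 2 / (self.k - 1)
        if self.var == 0.0:
            return False
        # Eccentricity of x w.r.t. all data seen, and its normalized form.
        ecc = 1.0 / self.k + (x - self.mean) ** 2 / (self.k * self.var)
        zeta = ecc / 2.0
        # Chebyshev-like condition: eccentric beyond m standard deviations.
        return zeta > (self.m ** 2 + 1) / (2.0 * self.k)

detector = TEDA(m=2.0)
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0]  # last sample is faulty
flags = [detector.update(x) for x in stream]
# flags → [False, False, False, False, False, False, False, True]
```

Because every quantity is updated recursively, the memory and per-sample cost are constant regardless of how long the stream runs, which is what enables real-time use.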

    A comparative study of autonomous learning outlier detection methods applied to fault detection

    Outlier detection is a problem that has been widely studied in the past few years due to its great applicability to real-world problems (e.g. finance, social networks, climate, security). Fault detection in industrial processes is one of these problems, and several methods have been proposed in the literature to address it. In this paper we present a comparative analysis of three recently introduced outlier detection methods: RDE, RDE with forgetting, and TEDA. The methods were applied to the data set provided by the DAMADICS benchmark, a well-known real-data benchmark for fault detection applications; the results, however, can be extended to similar problems in the area. We compare the main features of each method as well as the results obtained with them.
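For context on the first of the compared methods, the RDE (Recursive Density Estimation) density can be computed recursively from two running quantities, the mean of the samples and the mean of their squares. The scalar sketch below follows the published RDE formulation; it is illustrative only, and the class name and example data are ours:

```python
class RDE:
    """Recursive Cauchy-type density estimator for a scalar stream."""

    def __init__(self):
        self.k = 0
        self.mean = 0.0     # recursive mean of the samples
        self.scalar = 0.0   # recursive mean of the squared samples

    def density(self, x):
        """Density of x w.r.t. all samples seen so far (1 = fully typical)."""
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.scalar += (x * x - self.scalar) / self.k
        # D = 1 / (1 + ||x - mu||^2 + X - ||mu||^2), where X is the
        # recursive mean of squared samples; low density flags outliers.
        return 1.0 / (1.0 + (x - self.mean) ** 2
                      + self.scalar - self.mean ** 2)

rde = RDE()
densities = [rde.density(x) for x in [1.0, 1.0, 1.0, 1.0, 10.0]]
# the first four densities are 1.0; the outlier's density drops sharply
```

RDE with forgetting replaces these plain recursive means with exponentially weighted ones, so that old samples gradually lose influence, which is what makes it better suited to non-stationary processes.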

    Online fault detection based on typicality and eccentricity data analytics

    Fault detection is a task of major importance in industry nowadays, since it can considerably reduce the risk of accidents involving human lives, in addition to production and, consequently, financial losses. Fault detection systems have therefore been widely studied in the past few years, resulting in many different methods and approaches to the problem. This paper presents a detailed study of fault detection in industrial processes based on the recently introduced typicality and eccentricity data analytics (TEDA) approach. TEDA is a recursive, non-parametric method, first proposed for the general problem of anomaly detection in data streams. It is based on measures of data density and the proximity of each incoming data point to the analyzed data set. TEDA is an online autonomous learning algorithm that does not require a priori knowledge about the process, is completely free of user- and problem-defined parameters, and requires very low computational effort; it is thus very suitable for real-time applications. The results presented were generated by applying TEDA to an industrial process pilot plant.

    Unsupervised classification of data streams based on typicality and eccentricity data analytics

    In this paper, we propose a novel approach to unsupervised and online data classification. The algorithm is based on the statistical analysis of selected features and the development of a self-evolving fuzzy rule base. It starts learning from an empty rule base and, instead of offline training, learns “on-the-fly”. It is free of parameters; fuzzy rules and the number, size or radius of the classes do not need to be pre-defined. This makes it well suited to classifying online data streams under real-time constraints. Past data need not be stored in memory, since the algorithm is recursive, which makes it memory- and computationally efficient. It handles concept drift and concept evolution due to its evolving nature: not only can rules/classes be updated, but new classes can be created as new concepts emerge from the data. It can perform fuzzy classification/soft labeling, which is preferred over traditional crisp classification in many areas of application. The algorithm was validated on an industrial pilot plant, where the online-calculated period and amplitude of a control signal were used as input to a fault diagnosis application. The approach, however, is generic and can be applied to different problems and to much higher-dimensional inputs. The results obtained from the real data are very significant.
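The evolving behaviour described above, starting from an empty rule base, creating classes as new concepts emerge and producing fuzzy memberships, can be sketched in a greatly simplified form. Note one deliberate simplification: the sketch uses a fixed novelty threshold, whereas the actual algorithm is parameter-free and derives such bounds from the data. All names and values below are ours:

```python
class EvolvingClassifier:
    """Toy evolving classifier: one recursive prototype per class."""

    def __init__(self, novelty=1.0):
        self.protos = []        # list of (mean, count), one per class
        self.novelty = novelty  # illustrative fixed threshold (see lead-in)

    def memberships(self, x):
        """Fuzzy soft labels: normalized inverse distances to prototypes."""
        inv = [1.0 / (1e-9 + abs(x - m)) for m, _ in self.protos]
        s = sum(inv)
        return [v / s for v in inv]

    def learn(self, x):
        """Assign x to a class, creating a new one if needed; return index."""
        if not self.protos:                   # empty rule base at start
            self.protos.append((x, 1))
            return 0
        dists = [abs(x - m) for m, _ in self.protos]
        i = min(range(len(dists)), key=dists.__getitem__)
        if dists[i] > self.novelty:           # concept evolution: new class
            self.protos.append((x, 1))
            return len(self.protos) - 1
        m, n = self.protos[i]                 # recursive prototype update
        self.protos[i] = (m + (x - m) / (n + 1), n + 1)
        return i

clf = EvolvingClassifier(novelty=1.0)
labels = [clf.learn(x) for x in [0.0, 0.1, 0.2, 5.0, 5.1, 0.05]]
# labels → [0, 0, 0, 1, 1, 0]: a second class emerges at the jump to 5.0
```

Because only the prototypes are kept and updated recursively, no past samples are stored, mirroring the memory efficiency claimed for the full algorithm.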

    Evaluating time influence over performance of machine-learning-based diagnosis: a case study of the COVID-19 pandemic in Brazil

    Efficiently recognising severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) symptoms enables a quick and accurate diagnosis to be made, and helps in mitigating the spread of coronavirus disease 2019 (COVID-19). However, the emergence of new variants has caused constant changes in the symptoms associated with COVID-19. These constant changes directly impact the performance of machine-learning-based diagnosis. In this context, accounting for how symptoms change over time is necessary for accurate diagnoses. Thus, in this study, we propose a machine-learning-based approach for diagnosing COVID-19 that considers the importance of time in model predictions. Our approach analyses the performance of XGBoost using two different time-based strategies for model training: a month-to-month strategy and an accumulated strategy. The model was evaluated using well-known metrics: accuracy, precision, and recall. Furthermore, to explain the impact of feature changes on model predictions, feature importance was measured using SHAP, an explainable AI (XAI) technique. Our results indicate that considering time when creating a COVID-19 diagnostic prediction model is advantageous.
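The two time-based training strategies can be made concrete by showing how the train/test windows are constructed over monthly data batches. The sketch below covers only the window construction; fitting the model (e.g. an XGBoost classifier) would happen once per pair, and all names and the month labels are illustrative, not taken from the study:

```python
def training_windows(months, strategy):
    """Yield (train_months, test_month) pairs for each evaluation month."""
    pairs = []
    for i in range(1, len(months)):
        if strategy == "month-to-month":
            train = [months[i - 1]]   # train only on the previous month
        elif strategy == "accumulated":
            train = months[:i]        # train on all months seen so far
        else:
            raise ValueError(strategy)
        pairs.append((train, months[i]))
    return pairs

months = ["2020-03", "2020-04", "2020-05", "2020-06"]
windows = training_windows(months, "accumulated")
# for 2020-06 the accumulated strategy trains on all three earlier months,
# while month-to-month would train on 2020-05 alone
```

The month-to-month strategy tracks the most recent symptom profile (useful when variants shift symptoms quickly), while the accumulated strategy trades recency for a larger training set; comparing the two is what isolates the influence of time on the model's performance.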